
MCP Server — Writing Standards Agentic Knowledge Pack


v1.0.0 (available free)

Writing Standards

Production-grade content creation, editing, and quality assurance for marketing websites. Provides copywriting frameworks, structured editing methodology, content strategy planning, social content creation, and AI writing pattern detection.

This pack is brand-agnostic. It works with any website or product. Brand-specific voice, tone, and constraints are provided by the consumer at runtime via a brand-context.md file.

Brand Context Protocol

Before any content task, the consumer must provide a brand-context.md containing:

  • Site identity — name, one-liner, target audience
  • Voice rules — tone, personality, formality level (casual / professional / formal)
  • Words to use — approved terminology, brand language
  • Words to avoid — banned terms, competitor names, off-brand phrases
  • Proof points — key metrics, customer quotes, case study data
  • Content types — what the site publishes (blog, landing pages, docs, social, email)
  • Visual style — design constraints that affect copy (dark theme, minimal, data-heavy)

If no brand context is provided, ask for one before writing. Without it, copy will be generic and miss the brand's voice.
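A minimal brand-context.md might look like the sketch below. Every name and value here is illustrative, not part of the pack; only the section headings mirror the required fields above.

```markdown
# Brand Context

## Site identity
- Name: Acme Analytics
- One-liner: Usage analytics for API-first teams
- Audience: backend engineers, platform leads

## Voice rules
- Tone: confident, plain-spoken
- Formality: professional

## Words to use
- "workspace", "event stream"

## Words to avoid
- "revolutionary", "best-in-class", competitor names

## Proof points
- 99.95% uptime over the last 12 months (internal metric)

## Content types
- Blog, landing pages, docs

## Visual style
- Dark theme, data-heavy dashboards; keep copy short
```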

Content Taxonomy

| Type | Purpose | Key Constraint |
| --- | --- | --- |
| Landing page | Convert visitors → action | One CTA per page, benefits over features |
| Blog post | Educate, build authority, drive organic traffic | Searchable: keyword-driven, structured for SEO |
| Case study | Prove value with real outcomes | Data-backed, customer-approved quotes |
| Documentation | Help users succeed | Scannable, task-oriented, no marketing tone |
| FAQ | Handle objections, reduce support load | Real questions from real users, concise answers |
| Email | Nurture leads, retain users | Subject line is 80% of the work, one action per email |
| Social post | Build audience, drive traffic, establish voice | Platform-specific format and tone |
| Ad copy | Capture attention, drive clicks | Character limits, clear value prop, strong CTA |
| Comparison page | Win competitive evaluations | Honest, specific, never disparage competitors |

Module Catalog

| Module | Purpose | Key Frameworks |
| --- | --- | --- |
| copywriting | Write new marketing copy | Headline formulas, CTA patterns, page-type guidance, voice handling |
| copy-editing | Review and polish existing copy | Nine Sweeps structured editing, quick-pass checks |
| content-strategy | Plan what to write and why | Searchable vs Shareable, content pillars, buyer-stage keywords |
| social-content | Create platform-specific social posts | Hook formulas, repurposing system, content calendar |
| writing-quality | Detect and eliminate AI writing patterns | 29-pattern detection, 5-dimension scoring, content-type-aware thresholds |

Loading Order

  1. This file (_skill.md) — pack overview, brand context protocol
  2. Consumer's brand-context.md — site-specific voice and constraints
  3. Role file (_roles/writer.md or _roles/editor.md) — task lifecycle
  4. Module _skill.md — domain-specific methodology
  5. Module references/ — detailed frameworks, loaded on demand
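As an illustration, a consumer could assemble the agent's context in this order with a few lines of Python. This is a sketch under assumptions: the `assemble_context` helper and the concrete `copywriting` module path are hypothetical, and only the file names and their ordering come from the list above.

```python
from pathlib import Path

# Load order documented above; references/ (step 5) is loaded on demand,
# not up front, so it is deliberately absent from this list.
LOAD_ORDER = [
    "_skill.md",                       # 1. pack overview + brand context protocol
    "brand-context.md",                # 2. consumer's site-specific voice
    "_roles/writer.md",                # 3. role file (or _roles/editor.md)
    "modules/copywriting/_skill.md",   # 4. module methodology (example module)
]

def assemble_context(root: str, paths=LOAD_ORDER) -> str:
    """Concatenate pack files in load order, skipping any that are absent."""
    parts = []
    for rel in paths:
        p = Path(root) / rel
        if p.exists():
            parts.append(p.read_text())
    return "\n\n".join(parts)
```

The point of the ordering is that later files can override or specialize earlier ones: the brand context constrains the pack defaults, and the role file constrains both.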

Cross-Module Dependencies

  • writing-quality is referenced by both copy-editing (as the Anti-AI-Slop sweep) and by the writer role (as the final quality check before submission)
  • copywriting and copy-editing are complementary: use copywriting to draft, copy-editing to review
  • content-strategy informs what to write; copywriting and social-content handle the actual writing

Workflow Catalog

| Workflow | Module | Purpose |
| --- | --- | --- |
| writing_quality_check | writing-quality | Score content on 5 dimensions, detect AI patterns, content-type-aware pass/fail verdict |

Provenance & Confidence

Overall: ~85% GEN / 15% RES | Pack total: ~3,700 lines across 22 files

Only writing-quality has external source attribution. All other modules are generated domain expertise without cited research.

Confidence rule: 0.49 maximum. GEN-only modules are set to 0.24, at most half the confidence of RES-backed modules.

Calibration status: Scoring calibrated against 5 gold-standard landing pages (Stripe, Linear, Basecamp, Vercel, Notion). Content-type-aware thresholds: 28/50 for landing pages, 35/50 for prose.
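The content-type-aware verdict can be sketched as follows. The dictionary keys and function name are illustrative; only the 28/50 and 35/50 thresholds come from the calibration above.

```python
# Content-type-aware verdict, per the stated thresholds.
# Scores are out of 50 (five dimensions, 10 points each assumed).
THRESHOLDS = {"landing_page": 28, "prose": 35}

def verdict(score: int, content_type: str) -> str:
    """Return PASS, or REVISE with the point deficit in parentheses."""
    threshold = THRESHOLDS[content_type]
    if score >= threshold:
        return "PASS"
    return f"REVISE ({score - threshold})"
```

Under this rule, a landing page scoring 27/50 gets REVISE (-1) while one scoring 28/50 passes, which matches the calibration results reported below.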

GEN / RES per Module

```text
                         0%       25%       50%       75%      100%
                         │         │         │         │         │
writing-quality (0.49)   █████████████▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓  33% GEN / 67% RES
copywriting     (0.24)   ████████████████████████████████████████  100% GEN
copy-editing    (0.24)   ████████████████████████████████████████  100% GEN
content-strategy(0.24)   ████████████████████████████████████████  100% GEN
social-content  (0.24)   ████████████████████████████████████████  100% GEN
roles           (0.24)   ████████████████████████████████████████  100% GEN
                         │         │         │         │         │
                         ██ GEN (no source)    ▓▓ RES (cited)
```

Source Attribution

| Source | License | Contributes to | Lines |
| --- | --- | --- | --- |
| blader/humanizer | MIT | writing-quality: 29 AI patterns from Wikipedia WikiProject AI Cleanup | 217 |
| hardikpandya/stop-slop | MIT | writing-quality: 5-dimension scoring, banned phrases/structures, 8 core rules | 373 |
| (none) | — | copywriting, copy-editing, content-strategy, social-content, roles | ~3,100 |

Calibration Results

Tested against the five gold-standard landing pages plus one additional site (v3, after content-type fixes):

| Site | Score | Verdict | Key finding |
| --- | --- | --- | --- |
| stripe.com | 27 | REVISE (-1) | Strong proof but generic promo language |
| linear.app | 22 | REVISE (-6) | Unsourced claims, repetitive positioning |
| basecamp.com | 26 | REVISE (-2) | Strongest voice, em-dash overuse |
| vercel.com | 25 | REVISE (-3) | Unattributed metrics |
| notion.com | 28 | PASS | Most named proof points |
| aictpo.com | 28 | PASS | Direct, specific; trust is the weak dimension |

Later Improvements

Calibration Gaps (from v3 testing)

Scoring threshold: 3 of 5 gold-standard sites fail by 1-3 points. Consider lowering landing-page threshold to 25 or adding a "borderline pass" band at 25-27.

Trust dimension still too strict: Even with the "specific and verifiable" adjustment, the tool flags named metrics (e.g., Stripe's "US$1.9tn") as needing inline sources. Consider a further exception: metrics attributed to the site's own company/product are inherently first-party and need no external citation.

Negative calibration missing: No test against known-bad copy to confirm the tool catches it. Add 2-3 generic AI SaaS pages as negative test cases (expected score <22).

Research Targets (GEN → RES)

High priority (100% GEN, high impact):

  • copy-editing: Nine Sweeps methodology — cite editing frameworks (Ann Handley Everybody Writes, Zinsser On Writing Well, AP Stylebook)
  • copywriting: headline formulas — cite Copyhackers (Joanna Wiebe), published conversion research, Unbounce landing page studies
  • content-strategy: prioritization scoring — cite HubSpot Content Strategy, Animalz research, Orbit Media annual blogging survey
  • writing-quality: scoring.md rubric thresholds — anchor to published readability research (Flesch-Kincaid, Hemingway, Contently scoring)

Medium priority:

  • copywriting: CTA copy guidelines — cite ConversionXL button copy studies, Unbounce CTA research
  • content-strategy: topic cluster methodology — cite HubSpot pillar/cluster model documentation
  • social-content: hook formulas — cite LinkedIn algorithm research (Richard van der Blom studies), Justin Welsh methodology
  • social-content: platform algorithm behavior — cite platform documentation, Hootsuite/Buffer research reports

Low priority (experiential, hard to cite):

  • All Tips and Gotchas sections — domain expertise by nature
  • Roles (writer, editor) — process-oriented, not fact-oriented
  • Plain English Alternatives reference — widely known substitutions, no single authoritative source

AI Agent Roles